55 research outputs found

    Adversarial Attacks on Deep Neural Networks for Time Series Classification

    Time Series Classification (TSC) problems are encountered in many real-life data mining tasks ranging from medicine and security to human activity recognition and food safety. With the recent success of deep neural networks in various domains such as computer vision and natural language processing, researchers have started adopting these techniques to solve time series data mining problems. However, to the best of our knowledge, no previous work has considered the vulnerability of deep learning models to adversarial time series examples, which could potentially make them unreliable in situations where the decision taken by the classifier is crucial, such as in medicine and security. For computer vision problems, such attacks have been shown to be very easy to perform: altering the image by adding an imperceptible amount of noise tricks the network into wrongly classifying the input. Following this line of work, we propose to leverage existing adversarial attack mechanisms to add a special noise to the input time series in order to decrease the network's confidence when classifying instances at test time. Our results reveal that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks, which can have major consequences in multiple domains such as food safety and quality assurance. Comment: Accepted at IJCNN 201
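    The attack family the abstract alludes to can be sketched in a few lines. This is a minimal illustration, assuming a gradient-sign attack in the FGSM style; the toy linear classifier and its weight vector are hypothetical, not the paper's actual models.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Gradient-sign step: add a small perturbation along the sign of the
    loss gradient, the 'imperceptible noise' idea mentioned in the abstract."""
    return x + eps * np.sign(grad)

# Hypothetical linear classifier over a length-8 time series.
w = np.array([0.5, -1.0, 0.8, 0.2, -0.3, 0.9, -0.6, 0.1])
x = np.ones(8)

def confidence(series):
    # Sigmoid confidence for the positive class.
    return 1.0 / (1.0 + np.exp(-(w @ series)))

# For this model the gradient that lowers the positive-class score is -w,
# so stepping along sign(-w) decreases the classifier's confidence.
x_adv = fgsm_perturb(x, -w, eps=0.1)
assert confidence(x_adv) < confidence(x)
```

    Note that the perturbation budget `eps` bounds the per-timestep change, which is what keeps the adversarial series close to the original.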

    Deep learning for time series classification: a review

    Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date. Comment: Accepted at Data Mining and Knowledge Discover
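    A core building block of the convolutional architectures such a study compares is a 1-D convolution followed by global average pooling. The NumPy sketch below illustrates that block only; the kernels and output weights are illustrative, not taken from the benchmarked models.

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution over a univariate series."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(kernel)] @ kernel for i in range(len(x))])

def fcn_logits(x, kernels, w_out):
    """Conv -> ReLU per filter, global average pooling, then a linear head,
    the FCN-style pattern common in deep TSC architectures."""
    feats = [np.maximum(conv1d(x, k), 0).mean() for k in kernels]  # GAP
    return np.array(feats) @ w_out

x = np.sin(np.linspace(0, 6, 32))            # toy univariate series
kernels = [np.array([1.0, 0.0, -1.0]),       # edge-like filter
           np.array([0.5, 0.5, 0.5])]        # smoothing filter
w_out = np.array([[1.0, -1.0], [0.5, 0.2]])  # 2 filters -> 2 class logits
logits = fcn_logits(x, kernels, w_out)
```

    Global average pooling is what makes such networks length-independent: the same filters apply to series of any length.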

    Transfer learning for time series classification

    Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network's weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks' generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, which is the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the model's predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets. Comment: Accepted at IEEE International Conference on Big Data 201
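    The source-selection idea can be sketched directly: compute a Dynamic Time Warping distance between a prototype series of the target dataset and one of each candidate source, then transfer from the closest. The prototypes and dataset names below are hypothetical stand-ins for the per-dataset representatives the paper computes.

```python
import numpy as np

def dtw(a, b):
    """Classic O(n*m) Dynamic Time Warping distance between two series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def best_source(target_proto, source_protos):
    """Pick the source dataset whose prototype is closest under DTW."""
    return min(source_protos, key=lambda name: dtw(target_proto, source_protos[name]))

target = [0.0, 1.0, 2.0, 1.0, 0.0]
sources = {"A": [0.0, 1.1, 2.1, 0.9, 0.1],   # similar shape
           "B": [5.0, 5.0, 5.0, 5.0, 5.0]}   # flat, dissimilar
assert best_source(target, sources) == "A"
```

    DTW rather than Euclidean distance is the natural choice here because it tolerates the temporal shifts that are pervasive across time series datasets.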

    Generalization of the Voronoi diagram and placement of complex geometric shapes in a point cloud

    Computational geometry is an active branch of computer science whose goal is the design of efficient algorithms for solving geometric problems. Such algorithms are useful in domains like engineering, industry, and multimedia, and to be efficient they often rely on specialized data structures. In this thesis we focused on one such structure: the Voronoi diagram. We proposed a new generalized diagram, obtained by extending the empty-disk predicate (satisfied by every Voronoi region) to an arbitrary union of disks. We analyzed the new plane regions based on the extended predicate and designed algorithms for computing them. We then considered a related topic: shape placement problems, a recurring theme in computational geometry. We introduced new notation along with a global framework for such problems and proposed, for the first time, a generic resolution method, in the sense that it can solve various placement problems with a single algorithm. On the one hand, our work extends the scope of Voronoi-based data structures; on the other, it simplifies the practical use of computational geometry by unifying the definitions and algorithms associated with shape placement problems.
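    The empty-disk predicate the thesis generalizes can be stated concretely. The sketch below shows the classic single-site version (a point lies in the Voronoi region of a site iff the disk centred at the point and reaching that site contains no other site); the thesis's extension to unions of disks is not reproduced here, and the sites are illustrative.

```python
import math

def in_voronoi_region(p, site, sites):
    """Empty-disk predicate: p is in the Voronoi region of `site` iff the
    disk centred at p with radius |p - site| contains no other site."""
    r = math.dist(p, site)
    return all(math.dist(p, s) >= r for s in sites if s != site)

sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
assert in_voronoi_region((1.0, 0.0), (0.0, 0.0), sites)       # nearest site wins
assert not in_voronoi_region((3.5, 0.0), (0.0, 0.0), sites)   # (4,0) is closer
```

    Stated this way, the generalization is natural to imagine: replace the single empty disk with a predicate over a union of disks, which reshapes the regions the diagram partitions the plane into.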

    GPU parallelization strategies for metaheuristics: a survey

    Metaheuristics have shown interesting results in solving hard optimization problems. However, they become limited in terms of effectiveness and runtime for high-dimensional problems. Thanks to the independence of metaheuristic components, parallel computing is an attractive way to reduce execution time and improve solution quality. By exploiting the increasing performance and programmability of graphics processing units (GPUs) for this purpose, GPU-based parallel metaheuristics have been implemented using different designs. Recent results in this area show that GPUs tend to be effective co-processors for tackling complex optimization problems. In this survey, the mechanisms involved in GPU programming for implementing parallel metaheuristics are presented and discussed through a study of relevant research papers. Metaheuristics can obtain satisfying results when solving optimization problems in a reasonable time, but they suffer from a lack of scalability and become limited when facing complex high-dimensional optimization problems. To overcome this limitation, GPU-based parallel computing is a strong alternative: thanks to GPUs, parallel metaheuristics have achieved better results in terms of computation time, and even solution quality.
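    The "independence of metaheuristic components" that makes GPUs attractive is population-level data parallelism: each candidate solution can be evaluated by its own thread. The sketch below uses NumPy vectorization as a CPU stand-in for a GPU kernel; the search strategy (random restarts on the sphere function) is deliberately trivial and illustrative only.

```python
import numpy as np

def sphere(pop):
    """Vectorised fitness: each row of `pop` is evaluated independently,
    the property that maps one candidate onto one GPU thread."""
    return (pop ** 2).sum(axis=1)

def parallel_random_search(dim, pop_size, iters, rng):
    """Population-parallel search: all pop_size candidates per iteration
    are evaluated in a single vectorised call (the would-be GPU kernel)."""
    best, best_f = None, np.inf
    for _ in range(iters):
        pop = rng.uniform(-5, 5, size=(pop_size, dim))
        f = sphere(pop)          # one batched evaluation for the whole population
        i = f.argmin()
        if f[i] < best_f:
            best, best_f = pop[i], f[i]
    return best, best_f

rng = np.random.default_rng(0)
best, best_f = parallel_random_search(dim=4, pop_size=256, iters=50, rng=rng)
```

    The design point the survey examines is exactly this split: a sequential outer loop (selection, replacement) around a massively parallel inner evaluation.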

    Hybrid differential evolution algorithms for the optimal camera placement problem

    Purpose – This paper investigates to what extent hybrid differential evolution (DE) algorithms can be successful in solving the optimal camera placement problem.
    Design/methodology/approach – The problem is stated as a unicost set covering problem (USCP), and 18 problem instances are defined according to practical operational needs. Three methods are selected from the literature to solve these instances: a CPLEX solver, a greedy algorithm, and a row weighting local search (RWLS). These algorithms are then hybridized with two DE approaches designed for combinatorial optimization problems. The first is a set-based approach (DEset) from the literature. The second is a new similarity-based approach (DEsim) that takes advantage of the geometric characteristics of a camera in order to find better solutions.
    Findings – The experimental study highlights that RWLS and DEsim-CPLEX are the best of the proposed algorithms. Both easily outperform CPLEX, and it turns out that RWLS performs better on one class of problem instances whereas DEsim-CPLEX performs better on another, depending on the minimal resolution needed in practice.
    Originality/value – Up to now, the efficiency of RWLS and of the DEset approach has been investigated only for a few problems. The first contribution is therefore to apply these methods for the first time in the context of camera placement. Moreover, new hybrid DE algorithms are proposed to solve the optimal camera placement problem when stated as a USCP. The second main contribution is the design of the DEsim approach, which uses the distance between camera locations in order to fully benefit from the DE mutation scheme.
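    The unicost set covering formulation is worth making concrete. The sketch below implements the classic greedy USCP baseline (one of the three methods the paper selects, though its exact implementation is not reproduced here); the camera names and visibility sets are hypothetical.

```python
def greedy_uscp(universe, subsets):
    """Greedy unicost set cover: repeatedly pick the subset covering the
    largest number of still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("instance is infeasible")  # nothing covers the rest
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Camera-placement flavour: elements are points to observe, subsets are
# candidate camera locations with their (hypothetical) visibility sets.
points = range(6)
cams = {"c1": {0, 1, 2}, "c2": {2, 3}, "c3": {3, 4, 5}, "c4": {0, 5}}
cover = greedy_uscp(points, cams)
assert set().union(*(cams[c] for c in cover)) == set(points)
```

    Every subset has the same unit cost in a USCP, so the objective reduces to minimizing the number of cameras selected, which is what the DE hybrids then try to push below the greedy solution.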

    Méthodes algorithmiques pour l'allocation de fréquences

    Generating a frequency plan is a difficult task in the radio network planning process, and may lead to frequency plans with poor efficiency under real propagation conditions. The generation process relies on a model of the constraints between the transmitters of the radio network under study, and on a combinatorial optimization that tries to satisfy those constraints. This optimization provides an optimal solution from a mathematical viewpoint, but depending on how finely the constraints are modelled, the generated solution can be unusable under real propagation conditions. In this thesis, we introduce new algorithmic approaches for solving the Frequency Assignment Problem in the field of radio broadcasting. Experiments performed on real radio networks show that the results obtained by these approaches are better than the best existing operational solutions in this domain.
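    The constraint-satisfaction core of frequency assignment is often viewed as graph colouring: interfering transmitters must not share a frequency. The sketch below shows that view with a simple greedy plan; it is an illustration of the problem, not the thesis's algorithms, and the transmitter network is hypothetical.

```python
def assign_frequencies(transmitters, interferes, n_freq):
    """Greedy frequency plan: give each transmitter the lowest frequency
    not already used by an interfering neighbour (graph-colouring view)."""
    plan = {}
    for t in transmitters:
        used = {plan[n] for n in interferes.get(t, ()) if n in plan}
        freq = next((f for f in range(n_freq) if f not in used), None)
        if freq is None:
            raise ValueError(f"no admissible frequency for {t}")
        plan[t] = freq
    return plan

# Hypothetical 4-transmitter network; edges mark pairs that interfere.
interferes = {"t1": ["t2", "t3"], "t2": ["t1"], "t3": ["t1", "t4"], "t4": ["t3"]}
plan = assign_frequencies(["t1", "t2", "t3", "t4"], interferes, n_freq=3)
assert all(plan[a] != plan[b] for a in interferes for b in interferes[a])
```

    The gap the thesis points at lies precisely in the constraint model: a plan that satisfies such pairwise constraints can still fail under real propagation conditions the model did not capture.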